With the rapid development of deep generative models (such as Generative Adversarial Networks and Auto-encoders), AI-synthesized images of human faces are now of such high quality that humans can hardly distinguish them from pristine ones. Although existing detection methods have shown high performance in specific evaluation settings, e.g., on images from seen models or on images without real-world post-processing, they tend to suffer serious performance degradation in real-world scenarios, where testing images can be generated by more powerful generation models or combined with various post-processing operations. To address this issue, we propose a Global and Local Feature Fusion (GLFF) framework that learns rich and discriminative representations for face forgery detection by combining multi-scale global features from the whole image with refined local features from informative patches. GLFF fuses information from two branches: a global branch that extracts multi-scale semantic features and a local branch that selects informative patches for detailed local artifact extraction. Due to the lack of a face forgery dataset simulating real-world applications for evaluation, we further create a challenging face forgery dataset, named DeepFakeFaceForensics (DF^3), which contains images from 6 state-of-the-art generation models combined with a variety of post-processing techniques to approximate real-world scenarios. Experimental results demonstrate the superiority of our method over state-of-the-art methods on the proposed DF^3 dataset and three other open-source datasets.
translated by Google Translate
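The two-branch design above can be pictured with a minimal NumPy sketch, assuming a simple concatenation-based fusion and a scalar informativeness score per patch; the paper's actual fusion module and patch selector are learned, so all names and shapes below are illustrative, not GLFF's implementation:

```python
import numpy as np

def fuse_global_local(global_feats, patch_feats, patch_scores, k=2):
    # pick the k highest-scoring (most informative) patches
    top = sorted(np.argsort(patch_scores)[-k:])
    local = np.concatenate([patch_feats[i] for i in top])
    # schematic fusion: concatenate multi-scale global features
    # with the selected local patch features
    return np.concatenate([np.concatenate(global_feats), local])

global_feats = [np.ones(4), np.ones(8)]        # features from two scales
patch_feats = np.arange(12.0).reshape(4, 3)    # 4 candidate patches
patch_scores = np.array([0.1, 0.9, 0.3, 0.7])  # informativeness scores
fused = fuse_global_local(global_feats, patch_feats, patch_scores)
print(fused.shape)  # (18,)
```

A learned fusion module would replace the plain concatenation, but the data flow (multi-scale global features joined with top-k patch features) is the same.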
The mainstream workflow of image recognition applications is to first train one global model on the cloud for a wide range of classes and then serve numerous clients, each with heterogeneous images from a small subset of the classes to be recognized. Given the cloud-client discrepancy in the range of image classes, the recognition model should be strongly adaptive, intuitively by concentrating its focus on each individual client's local dynamic class subset, while incurring negligible overhead. In this work, we propose to plug a new intra-client and inter-image attention (ICIIA) module into existing backbone recognition models, requiring only one-time cloud-based training to become client-adaptive. In particular, given a target image from a certain client, ICIIA introduces multi-head self-attention to retrieve relevant images from the client's historical unlabeled images, thereby calibrating the focus and the recognition result. Further considering that ICIIA's overhead is dominated by linear projection, we propose partitioned linear projection with feature shuffling as a replacement and allow increasing the number of partitions to dramatically improve efficiency without sacrificing too much accuracy. We finally evaluate ICIIA using 3 different recognition tasks with 9 backbone models over 5 representative datasets. Extensive evaluation results demonstrate the effectiveness and efficiency of ICIIA. Specifically, for ImageNet-1K with the backbone models of MobileNetV3-L and Swin-B, ICIIA can improve the testing accuracy to 83.37% (+8.11%) and 88.86% (+5.28%), while adding only 1.62% and 0.02% of FLOPs, respectively.
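The partitioned linear projection with feature shuffling can be sketched as splitting the feature vector into partitions, projecting each with a small matrix, and interleaving the outputs so later partitioned layers mix information across groups; with p partitions the multiply count drops from d*d to d*d/p. This is a schematic reading of the idea in NumPy with illustrative names, not the paper's implementation:

```python
import numpy as np

def partitioned_linear_projection(x, weights):
    # x: (d,) feature vector; weights: one (d/p, d/p) matrix per partition
    p = len(weights)
    chunks = np.split(x, p)                     # partition the features
    y = np.stack([W @ c for W, c in zip(weights, chunks)])  # (p, d/p)
    # feature shuffling: interleave channels across partitions so that
    # subsequent partitioned layers can mix information between groups
    return y.T.reshape(-1)

rng = np.random.default_rng(0)
d, p = 8, 2
x = rng.normal(size=d)
weights = [rng.normal(size=(d // p, d // p)) for _ in range(p)]
out = partitioned_linear_projection(x, weights)
# a full d x d projection costs d*d multiplies; p partitions cost p*(d/p)^2 = d*d/p
print(out.shape)  # (8,)
```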
In recommender systems, a common problem is that the collected data contain various biases, which degrade the generalization ability of recommendation models and lead to inaccurate predictions. Doubly robust (DR) learning has been studied for many tasks in RS, with the advantage that unbiased learning can be achieved when either a single imputation model or a single propensity model is accurate. In this paper, we propose a multiple robust (MR) estimator that can exploit multiple candidate imputation and propensity models to achieve unbiasedness. Specifically, the MR estimator is unbiased when any of the imputation or propensity models, or any linear combination of these models, is accurate. Theoretical analysis shows that the proposed MR is an enhanced version of DR with only a single imputation and propensity model, and has a smaller bias. Motivated by the generalization error bound of MR, we further propose a novel multiple robust learning method with stabilization. We conduct extensive experiments on real-world and semi-synthetic datasets, which demonstrate the superiority of the proposed approach over state-of-the-art methods.
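For context, the doubly robust (DR) estimator that MR generalizes combines an imputation model and a propensity model; the textbook DR form, sketched below in NumPy with illustrative names (not the paper's exact estimator), is unbiased whenever either model is accurate:

```python
import numpy as np

def doubly_robust_estimate(r, e_hat, p_hat, o):
    # r: observed prediction errors (only meaningful where o == 1)
    # e_hat: imputed errors; p_hat: estimated propensities in (0, 1]
    # o: 0/1 indicators of which user-item pairs were observed
    correction = o * (r - e_hat) / p_hat   # propensity-weighted residual
    return float(np.mean(e_hat + correction))

# toy check of double robustness: when the imputation model is exact,
# the estimate equals the true mean error even under wrong propensities
true_e = np.array([0.25, 0.5, 0.75, 0.5])
o = np.array([1, 0, 1, 0])
bad_p = np.array([0.9, 0.5, 0.3, 0.5])   # deliberately inaccurate
est = doubly_robust_estimate(true_e, true_e, bad_p, o)
print(est)  # 0.5
```

The MR estimator replaces the single e_hat and p_hat with combinations of multiple candidate models, so accuracy of any one candidate (or a linear combination) suffices for unbiasedness.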
We present an open-source toolbox, named MMRotate, which provides a coherent algorithmic framework for training, inference, and evaluation of popular deep-learning-based rotated object detection algorithms. MMRotate implements 18 state-of-the-art algorithms and supports the three most frequently used angle definition methods. To facilitate future research and industrial applications of rotated object detection problems, we also provide a large number of trained models and detailed benchmarks to give insights into the performance of rotated object detection. MMRotate is publicly released at https://github.com/open-mmlab/mmrotate.
Accurately estimating the value function in deep reinforcement learning (DRL) is crucial so that the agent can take appropriate actions rather than suboptimal ones. However, existing actor-critic methods suffer more or less from underestimation or overestimation bias, which negatively affects their performance. In this paper, we reveal a simple but effective principle: proper value correction benefits bias alleviation. Accordingly, we propose a generalized-activated weighting operator that uses any non-decreasing function, namely an activation function, as weights for better value estimation. In particular, we integrate the generalized-activated weighting operator into value estimation and introduce a novel algorithm, Generalized-activated Deep Double Deterministic Policy Gradients (GD3). We theoretically show that GD3 is capable of alleviating potential estimation bias. Interestingly, we find that simple activation functions lead to satisfying performance with no additional tricks and can contribute to faster convergence. Experimental results on numerous challenging continuous control tasks show that GD3 with task-specific activation outperforms common baseline methods. We also find that a fine-tuned polynomial activation function achieves superior results on most of the tasks.
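One plausible reading of the generalized-activated weighting operator is to weight candidate Q-estimates by a non-decreasing activation function of themselves and normalize; the NumPy sketch below illustrates this reading only (the exact operator in GD3 may differ, and all names are illustrative):

```python
import numpy as np

def generalized_activated_value(q_values, activation):
    # weight each candidate Q-estimate by a non-decreasing activation
    # function of itself, then return the normalized weighted average
    q = np.asarray(q_values, dtype=float)
    w = activation(q)
    return float(np.sum(w * q) / np.sum(w))

softplus = lambda x: np.log1p(np.exp(x))   # one simple non-decreasing choice
q = [1.0, 2.0, 4.0]
v = generalized_activated_value(q, softplus)
# larger estimates get larger weights, so the result lies between the
# plain mean (underestimation-prone) and the max (overestimation-prone)
print(float(np.mean(q)) < v < max(q))  # True
```

Changing the activation trades off between these two extremes, which is consistent with the bias-alleviation principle the abstract describes.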
Differentiable architecture search has gradually become a mainstream research topic in neural architecture search (NAS) for its ability to improve efficiency compared with earlier NAS methods (EA-based, RL-based). Recent differentiable NAS also aims to further improve search efficiency, reduce GPU memory consumption, and address the "depth gap" issue. However, these methods are no longer capable of tackling non-differentiable objectives, let alone multiple objectives, e.g., performance, robustness, efficiency, and other metrics. We propose an end-to-end architecture search framework towards non-differentiable objectives, TND-NAS, with the merits of the high efficiency of differentiable NAS frameworks and the compatibility with non-differentiable metrics of multi-objective NAS (MNAS). Under the differentiable NAS framework, with the continuous relaxation of the search space, TND-NAS has its architecture parameters ($\alpha$) optimized in discrete space, while resorting to a search policy of progressively shrinking the supernet by $\alpha$. Our representative experiments take two objectives (parameters, accuracy) as an example: we achieve a series of high-performance compact architectures on the CIFAR-10 (1.09M/3.3%, 2.4M/2.95%, 9.57M/2.54%) and CIFAR-100 (2.46M/18.3%, 5.46M/16.73%, 12.88M/15.20%) datasets. Favorably, under real-world scenarios (resource-constrained, platform-specialized), TND-NAS can conveniently reach Pareto-optimal solutions.
Almost all existing datasets based on the Facial Action Coding System that include facial action unit (AU) intensity information annotate the intensity values hierarchically using A-E levels. However, facial expressions change continuously and shift from one state to another. Therefore, it is more effective to regress the intensity values of local facial AUs to represent whole facial expression changes, especially in the fields of expression transfer and facial animation. We combine the extension of FEAFA with the relabeled DISFA database, which is now available at https://www.iiplab.net/feafa+/. The extended FEAFA (FEAFA+) includes 150 video sequences from FEAFA and DISFA, with a total of 230,184 frames, manually annotated with floating-point intensity values of 24 redefined AUs using the Expression Quantitative Tool. We also list crude numerical results for the posed and spontaneous subsets and provide a baseline comparison for the AU intensity regression task.
Multi-goal reinforcement learning is widely applied in planning and robot manipulation. Two main challenges in multi-goal reinforcement learning are sparse rewards and sample inefficiency. Hindsight Experience Replay (HER) aims to tackle both challenges via goal relabeling. However, HER-related works still require millions of samples and huge computation. In this paper, we propose Multi-step Hindsight Experience Replay (MHER), which incorporates multi-step relabeled returns based on $n$-step relabeling to improve sample efficiency. Despite the advantages of $n$-step relabeling, we theoretically and experimentally show that the off-policy $n$-step bias introduced by $n$-step relabeling may lead to poor performance in many environments. To address this issue, two bias-reduced MHER algorithms, MHER($\lambda$) and Model-based MHER (MMHER), are proposed. MHER($\lambda$) exploits the $\lambda$ return, while MMHER benefits from model-based value expansions. Experimental results on numerous multi-goal robotic tasks show that our solutions can successfully alleviate the $n$-step bias and achieve significantly higher sample efficiency than HER and Curriculum-guided HER, with little additional computation.
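The $n$-step return that MHER relabels can be sketched as a discounted sum of rewards plus a discounted bootstrap value, computed after the rewards have been recomputed against the hindsight goal; a minimal NumPy-style sketch with illustrative names (not the paper's code):

```python
def n_step_return(rewards, gamma, bootstrap_q):
    # discounted sum of the n relabeled rewards plus a discounted
    # bootstrap value from the critic at the n-th next state
    n = len(rewards)
    discounted = sum(gamma**i * r for i, r in enumerate(rewards))
    return discounted + gamma**n * bootstrap_q

# after hindsight relabeling, rewards are recomputed against the new goal,
# e.g. a sparse -1/0 reward that becomes 0 once the relabeled goal is reached
relabeled_rewards = [-1.0, -1.0, 0.0]
ret = n_step_return(relabeled_rewards, gamma=0.98, bootstrap_q=0.0)
print(ret)  # -1.98
```

Because the intermediate actions came from an old policy, this target carries the off-policy $n$-step bias the abstract describes; MHER($\lambda$) and MMHER mix such targets to reduce it.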
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
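The information-gain feature selection mentioned above can be illustrated with a minimal NumPy sketch that scores a discrete feature by H(labels) − H(labels | feature); this is a textbook illustration, not MGTAB's actual feature-extraction code:

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(feature, labels):
    # H(labels) - H(labels | feature) for a discrete feature
    cond = 0.0
    for v in np.unique(feature):
        mask = feature == v
        cond += mask.mean() * entropy(labels[mask])
    return entropy(labels) - cond

# toy example: a feature that perfectly separates bots from humans has
# gain equal to the full label entropy (1 bit for a balanced split)
labels = np.array([0, 0, 1, 1])   # 0 = human, 1 = bot
feature = np.array([0, 0, 1, 1])
gain = information_gain(feature, labels)
print(gain)  # 1.0
```

Ranking all candidate user property features by this score and keeping the top 20 matches the selection procedure the abstract describes.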